FORBID: Fast Overlap Removal by Stochastic GradIent Descent for Graph Drawing
Abstract
While many graph drawing algorithms consider nodes as points, graph visualization tools often represent them as shapes. These shapes support the display of information such as labels, or encode various data with size or color. However, they can create overlaps between nodes, which hinder the exploration process by hiding parts of the information. It is therefore of utmost importance to remove these overlaps to improve readability. If not handled during the layout process, Overlap Removal (OR) algorithms have been proposed as layout post-processing. As the original layout usually conveys information about the graph topology, it is important that OR algorithms preserve it as much as possible. We propose a novel algorithm that models OR as a joint stress and scaling optimization problem, and leverages efficient stochastic gradient descent. This approach is compared with state-of-the-art algorithms, and several quality metrics demonstrate its efficiency in quickly removing overlaps while retaining the initial layout structures.
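As a rough illustration of the idea described in the abstract, the following toy sketch (not the authors' FORBID implementation) resolves overlaps by sampling node pairs, pushing overlapping pairs apart with a gradient-style step, and applying a weak stress-like pull toward the initial positions to retain the layout structure. The function name, the circumscribed-circle approximation of node boxes, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy sketch of SGD-based overlap removal; NOT the authors' FORBID code.
def remove_overlaps_sgd(pos, sizes, iters=2000, lr=0.5, pull=0.01, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.asarray(pos, dtype=float).copy()
    init = pos.copy()
    n = len(pos)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        delta = pos[j] - pos[i]
        dist = np.linalg.norm(delta) + 1e-9
        # Center distance needed so the two boxes stop overlapping
        # (crude circumscribed-circle approximation of each box).
        min_sep = (np.linalg.norm(sizes[i]) + np.linalg.norm(sizes[j])) / 2
        if dist < min_sep:
            shift = lr * (min_sep - dist) * delta / dist
            pos[i] -= 0.5 * shift  # move the pair apart symmetrically
            pos[j] += 0.5 * shift
        # Weak stress-like term: stay close to the original layout.
        pos[i] += pull * (init[i] - pos[i])
        pos[j] += pull * (init[j] - pos[j])
    return pos

# Example: two overlapping 1x1 boxes get separated; the distant node barely moves.
layout = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0]])
print(remove_overlaps_sgd(layout, np.ones((3, 2))))
```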
Similar resources
Fast Node Overlap Removal in Graph Layout
Most graph layout algorithms in the field of graph drawing treat nodes as points. The problem of node overlap removal is to adjust the layout generated by such methods so that nodes of non-zero width and height do not overlap, yet are as close as possible to their original positions. We give an O(n log n) algorithm for achieving this assuming that the number of nodes overlapping any single node ...
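For context, the non-overlap condition this problem works with can be stated compactly: two axis-aligned boxes overlap iff their projections overlap on both axes, so removing an overlap means separating the pair along at least one axis. The helper below is only a hedged illustration of that condition, not the paper's O(n log n) algorithm; the center/size convention is an assumption.

```python
# Illustration of the non-overlap condition only, not the paper's algorithm.
def boxes_overlap(c1, s1, c2, s2):
    """Axis-aligned boxes with centers c and full sizes s = (width, height)."""
    return (abs(c1[0] - c2[0]) < (s1[0] + s2[0]) / 2 and
            abs(c1[1] - c2[1]) < (s1[1] + s2[1]) / 2)

print(boxes_overlap((0, 0), (4, 2), (3, 0), (4, 2)))  # True: too close in x
print(boxes_overlap((0, 0), (4, 2), (5, 0), (4, 2)))  # False: separated in x
```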
A Fast Distributed Stochastic Gradient Descent Algorithm for Matrix Factorization
The accuracy and effectiveness of the matrix factorization technique were well demonstrated in the Netflix movie recommendation contest. Among the numerous solutions for matrix factorization, Stochastic Gradient Descent (SGD) is one of the most widely used algorithms. However, as a sequential approach, the SGD algorithm cannot be used directly in the Distributed Cluster Environment (DCE). In this paper...
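To make the sequential baseline concrete, here is a minimal sketch of plain SGD for matrix factorization (not the distributed algorithm proposed in the paper; names and hyperparameters are illustrative). Each observed rating updates both the corresponding user row and item row; this shared-state coupling is exactly what makes naive distribution of SGD difficult.

```python
import numpy as np

# Minimal sketch of the *sequential* SGD baseline for matrix factorization.
def sgd_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.02, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))  # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item factors
    for _ in range(epochs):
        for idx in rng.permutation(len(ratings)):
            u, i, r = ratings[idx]
            err = r - P[u] @ Q[i]  # prediction error for this sample
            pu = P[u].copy()
            # Both factor rows are touched by every sample.
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
P, Q = sgd_mf(ratings, n_users=2, n_items=3)
print(P[0] @ Q[0])  # should be close to the observed rating 5.0
```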
Variational Stochastic Gradient Descent
In the Bayesian approach to probabilistic modeling of data, we select a model for the probabilities of the data that depends on a continuous vector of parameters. For a given data set, Bayes' theorem gives a probability distribution of the model parameters. Then the inference of outcomes and probabilities of new data can be found by averaging over the parameter distribution of the model, which is an intr...
Byzantine Stochastic Gradient Descent
This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of the m machines which allegedly compute stochastic gradients at every iteration, an α-fraction are Byzantine and can behave arbitrarily and adversarially. Our main result is a variant of stochastic gradient descent (SGD) which finds ε-approximate minimizers of convex functions in T = Õ ( 1...
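A small hedged illustration of why robustness matters in this setting (a generic robust-aggregation idea, not necessarily the exact rule used in the paper): plain averaging of worker gradients is hijacked by a few Byzantine reports, while a coordinate-wise median tolerates a minority of arbitrary values. All numbers below are made up.

```python
import numpy as np

# Seven honest workers report noisy gradients near [1, 2]; three Byzantine
# workers report arbitrary values. Compare mean vs. coordinate-wise median.
rng = np.random.default_rng(0)
honest = [np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(2) for _ in range(7)]
byzantine = [np.array([1e6, -1e6])] * 3  # adversarial gradient reports
grads = np.stack(honest + byzantine)
print("mean:  ", grads.mean(axis=0))        # hijacked by the adversaries
print("median:", np.median(grads, axis=0))  # stays near the honest gradient
```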
Parallelized Stochastic Gradient Descent
With the increase in available data, parallel machine learning has become an increasingly pressing problem. In this paper we present the first parallel stochastic gradient descent algorithm, including a detailed analysis and experimental evidence. Unlike prior work on parallel optimization algorithms [5, 7], our variant comes with parallel acceleration guarantees and it poses n...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2023
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-031-22203-0_6